Gradient Regularized Contrastive Learning for Continual Domain Adaptation

Authors

Abstract

Human beings can quickly adapt to environmental changes by leveraging learning experience. However, adapting deep neural networks to dynamic environments with machine learning algorithms remains a challenge. To better understand this issue, we study the problem of continual domain adaptation, where the model is presented with a labelled source domain and a sequence of unlabelled target domains. The obstacles in this problem are both domain shift and catastrophic forgetting. We propose Gradient Regularized Contrastive Learning (GRCL) to tackle these obstacles. At the core of our method, gradient regularization plays two key roles: (1) enforcing the contrastive loss not to harm the discriminative ability of features, which can, in turn, benefit the adaptation ability to target domains; (2) constraining the gradient not to increase the classification loss on old domains, which enables the model to preserve its performance on old domains when adapting to an in-coming domain. Experiments on the Digits, DomainNet and Office-Caltech benchmarks demonstrate the strong performance of our approach compared to the state-of-the-art.
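
As a rough illustration of role (2), the sketch below shows a generic gradient-projection step: if a proposed update direction conflicts (negative dot product) with the gradient of an old-domain classification loss, it is projected back so that, to first order, the old-domain loss does not increase. This is an A-GEM-style simplification and not the paper's exact optimization; `project_gradient` and the toy gradients are illustrative names only.

```python
import numpy as np

def project_gradient(g_new, g_refs, eps=1e-12):
    """Project a proposed update direction so it does not conflict with
    reference gradients (e.g. gradients of old-domain classification losses).

    A negative dot product means the update would increase that reference
    loss to first order, so the conflicting component is removed with a
    half-space projection (a simplification; the paper's method may solve
    a joint constrained problem instead).
    """
    g = np.asarray(g_new, dtype=float).copy()
    for g_ref in g_refs:
        dot = g @ g_ref
        if dot < 0:  # the update would increase this reference loss
            g = g - (dot / (g_ref @ g_ref + eps)) * g_ref
    return g

# Toy usage: a 3-parameter model with one old-domain constraint.
g_adapt = np.array([1.0, -2.0, 0.5])  # gradient of the adaptation (contrastive) loss
g_old = np.array([0.5, 1.0, 0.0])     # gradient of an old-domain classification loss
print(project_gradient(g_adapt, [g_old]))
```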

Related Articles

Gradient Episodic Memory for Continual Learning

One major obstacle towards artificial intelligence is the poor ability of models to quickly solve new problems, without forgetting previously acquired knowledge. To better understand this issue, we study the problem of learning over a continuum of data, where the model observes, once and one by one, examples concerning a sequence of tasks. First, we propose a set of metrics to evaluate models l...
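
The truncated sentence refers to metrics for evaluating models that learn tasks sequentially. As a hedged illustration, the sketch below computes average accuracy and backward transfer from an accuracy matrix, following the definitions commonly attributed to GEM; `continual_learning_metrics` and the toy matrix are illustrative, and forward transfer (which additionally needs a random-initialization baseline) is omitted.

```python
import numpy as np

def continual_learning_metrics(R):
    """Compute average accuracy (ACC) and backward transfer (BWT) from an
    accuracy matrix R, where R[i, j] is the test accuracy on task j after
    the model has finished training on task i (tasks arrive in order 0..T-1).
    """
    R = np.asarray(R, dtype=float)
    T = R.shape[0]
    acc = R[T - 1].mean()  # final accuracy averaged over all tasks
    bwt = np.mean([R[T - 1, j] - R[j, j] for j in range(T - 1)])  # negative means forgetting
    return acc, bwt

# Toy example with 3 sequential tasks.
R = [[0.90, 0.10, 0.12],
     [0.85, 0.88, 0.15],
     [0.80, 0.84, 0.91]]
print(continual_learning_metrics(R))  # -> (0.85, -0.07)
```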

Target contrastive pessimistic risk for robust domain adaptation

In domain adaptation, classifiers with information from a source domain adapt to generalize to a target domain. However, an adaptive classifier can perform worse than a non-adaptive classifier due to invalid assumptions, increased sensitivity to estimation errors or model misspecification. Our goal is to develop a domain-adaptive classifier that is robust in the sense that it does not rely on r...

Sample-oriented Domain Adaptation for Image Classification

Image processing is a method to perform some operations on an image, in order to get an enhanced image or to extract some useful information from it. Conventional image processing algorithms cannot perform well in scenarios where the training images (source domain) that are used to learn the model have a different distribution from the test images (target domain). Also, many real world applicat...

Domain Adaptation with Regularized Optimal Transport

We present a new and original method to solve the domain adaptation problem using optimal transport. By searching for the best transportation plan between the probability distribution functions of a source and a target domain, a non-linear and invertible transformation of the learning samples can be estimated. Any standard machine learning method can then be applied on the transformed set, whic...
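
A minimal sketch of this recipe, assuming entropy-regularized (Sinkhorn) transport between uniform empirical distributions followed by a barycentric mapping of the source samples; the cited paper's exact regularization may differ, and `sinkhorn_transport` with its toy data is illustrative only.

```python
import numpy as np

def sinkhorn_transport(Xs, Xt, reg=0.1, n_iter=200):
    """Entropy-regularized optimal transport (Sinkhorn) between uniform
    empirical distributions on source samples Xs and target samples Xt,
    followed by a barycentric mapping of the source samples onto the target.
    """
    Xs, Xt = np.asarray(Xs, float), np.asarray(Xt, float)
    ns, nt = len(Xs), len(Xt)
    a, b = np.full(ns, 1.0 / ns), np.full(nt, 1.0 / nt)   # uniform marginals
    C = ((Xs[:, None, :] - Xt[None, :, :]) ** 2).sum(-1)  # squared Euclidean cost
    C = C / C.max()                                       # normalize for numerical stability
    K = np.exp(-C / reg)
    u, v = np.ones(ns), np.ones(nt)
    for _ in range(n_iter):                               # Sinkhorn iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]                       # transport plan
    Xs_mapped = (P @ Xt) / P.sum(axis=1, keepdims=True)   # barycentric projection
    return Xs_mapped, P

# Toy usage: shift a source cloud toward a translated target cloud;
# a standard classifier could then be trained on the mapped samples.
rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(50, 2))
Xt = rng.normal(3.0, 1.0, size=(60, 2))
Xs_mapped, _ = sinkhorn_transport(Xs, Xt)
print(Xs.mean(0), Xs_mapped.mean(0))  # the mapped mean moves close to the target mean
```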

$\ell_1$ Regularized Gradient Temporal-Difference Learning

In this paper, we study Temporal Difference (TD) learning with linear value function approximation. It is well known that most TD learning algorithms are unstable with linear function approximation and off-policy learning. The recent development of Gradient TD (GTD) algorithms has addressed this problem successfully. However, the success of GTD algorithms requires a set of well chosen features,...
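
As a hedged sketch of the gradient-TD family discussed here, the code below performs one GTD2 update with linear features, plus an optional soft-thresholding step standing in for the $\ell_1$ penalty; it is a generic illustration (`gtd2_step` and the toy transition are made up), not the algorithm proposed in the paper.

```python
import numpy as np

def gtd2_step(theta, w, phi, phi_next, reward, gamma=0.99,
              alpha=0.05, beta=0.1, l1=0.0):
    """One GTD2 update for linear value-function approximation, with an
    optional soft-thresholding (proximal) step as a stand-in for
    ell_1 regularization.
    """
    delta = reward + gamma * theta @ phi_next - theta @ phi    # TD error
    theta = theta + alpha * (phi - gamma * phi_next) * (phi @ w)
    w = w + beta * (delta - phi @ w) * phi
    if l1 > 0:                                                 # proximal ell_1 step
        theta = np.sign(theta) * np.maximum(np.abs(theta) - alpha * l1, 0.0)
    return theta, w

# Toy usage: replay a single transition with 4 hand-crafted features.
theta, w = np.zeros(4), np.zeros(4)
phi, phi_next = np.array([1.0, 0.0, 0.5, 0.0]), np.array([0.0, 1.0, 0.5, 0.0])
for _ in range(3):
    theta, w = gtd2_step(theta, w, phi, phi_next, reward=1.0, l1=0.01)
print(theta, w)
```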

Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2021

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v35i3.16370